Applications based on image retrieval require editing and associating in intermediate spaces that represent high-level concepts such as objects and their relationships, rather than dense pixel-level representations such as RGB images or semantic label maps. We focus on one such representation, the scene graph, and propose a novel scene expansion task in which we enrich an input seed graph by adding new nodes (objects) and the corresponding relationships. To this end, we formulate scene graph expansion as a sequential prediction task involving multiple steps of first predicting a new node and then predicting the set of relationships between the newly predicted node and the previous nodes in the graph. We propose a sequencing strategy for the observed graphs that retains the clustering patterns among the nodes. In addition, we leverage external knowledge to train our graph generation model, enabling greater generalization of node predictions. Because existing maximum mean discrepancy (MMD) metrics are inadequate for evaluating the predicted relations between nodes (objects), we design novel metrics that comprehensively evaluate different aspects of the predicted relations. We conduct extensive experiments on the Visual Genome and VRD datasets to evaluate the expanded scene graphs using the standard MMD-based metrics and our proposed metrics. We observe that the graphs generated by our method, GEMS, better represent the true distribution of scene graphs than baseline methods such as GraphRNN.
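As an illustration of the node-then-edge sequential formulation described above, the following is a minimal sketch of an expansion loop; the GRU-based predictors, module names, and sizes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SceneGraphExpander(nn.Module):
    """Minimal sketch: alternately predict a new node, then its relations to prior nodes."""
    def __init__(self, num_obj_classes, num_rel_classes, hidden=256):
        super().__init__()
        self.obj_embed = nn.Embedding(num_obj_classes, hidden)
        self.graph_rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.node_head = nn.Linear(hidden, num_obj_classes)          # next-object distribution
        self.edge_head = nn.Linear(2 * hidden, num_rel_classes + 1)  # +1 for "no relation"

    def expand(self, seed_nodes, steps=3):
        # seed_nodes: (1, N) tensor of object-class ids from the seed graph
        nodes = list(seed_nodes[0].tolist())
        edges = []
        emb = self.obj_embed(seed_nodes)                  # (1, N, hidden)
        summary, h = self.graph_rnn(emb)                  # encode the seed graph
        for _ in range(steps):
            new_node = self.node_head(summary[:, -1]).argmax(-1)   # predict the next object
            new_emb = self.obj_embed(new_node).unsqueeze(1)
            # predict a relation between the new node and every existing node
            for j, prev in enumerate(nodes):
                prev_emb = self.obj_embed(torch.tensor([prev]))
                rel = self.edge_head(torch.cat([new_emb[:, 0], prev_emb], dim=-1)).argmax(-1)
                edges.append((len(nodes), j, rel.item()))
            nodes.append(new_node.item())
            summary, h = self.graph_rnn(new_emb, h)       # update the graph state
        return nodes, edges
```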
We propose a method to interactively control the animation of fluid elements in still images to generate cinemagraphs. Specifically, we focus on the animation of fluid elements such as water, smoke, and fire, which have the properties of repeating textures and continuous fluid motion. Taking inspiration from prior works, we represent the motion of such fluid elements in the form of a constant 2D optical flow map. To this end, we allow the user to provide any number of arrow directions and their associated speeds, along with a mask of the regions the user wants to animate. The user-provided arrow directions, their corresponding speed values, and the mask are then converted into a dense flow map representing a constant optical flow map (FD). We observe that FD, obtained using simple exponential operations, can closely approximate plausible motion of the elements in the image. We further refine the computed dense optical flow map FD using a generative adversarial network (GAN) to obtain a more realistic flow map. We design a novel UNet-based architecture to autoregressively generate future frames by forward-warping the input image features at different resolutions. We conduct extensive experiments on publicly available datasets and show that our method outperforms the baselines in both qualitative and quantitative metrics. In addition, we show qualitative animations of objects in directions not present in the training set and provide a way to synthesize videos that would otherwise not exist in the real world.
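As a rough illustration of how a few user arrows can be densified into a constant flow map, the sketch below spreads each arrow's velocity over the masked region with an exponential distance fall-off; the kernel width and blending scheme are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def dense_flow_from_arrows(h, w, arrows, mask, sigma=50.0):
    """arrows: list of ((y, x), (vy, vx)) user inputs; mask: (h, w) bool region to animate.
    Returns an (h, w, 2) constant flow map FD."""
    ys, xs = np.mgrid[0:h, 0:w]
    flow = np.zeros((h, w, 2), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    for (py, px), (vy, vx) in arrows:
        # exponential fall-off of each arrow's influence with distance
        d2 = (ys - py) ** 2 + (xs - px) ** 2
        wgt = np.exp(-d2 / (2.0 * sigma ** 2))
        flow[..., 0] += wgt * vy
        flow[..., 1] += wgt * vx
        weight += wgt
    flow /= np.maximum(weight, 1e-6)[..., None]   # normalized blend of all arrows
    flow[~mask] = 0.0                             # animate only the user-selected region
    return flow

# usage: one arrow pointing right at the image centre
mask = np.ones((240, 320), dtype=bool)
fd = dense_flow_from_arrows(240, 320, [((120, 160), (0.0, 2.0))], mask)
```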
The rise in data has led to the need for dimension reduction techniques, especially in the area of non-scalar variables, including time series, natural language processing, and computer vision. In this paper, we specifically investigate dimension reduction for time series through functional data analysis. Current methods for dimension reduction in functional data, functional principal component analysis and functional autoencoders, are limited to linear mappings or scalar representations of the time series and are therefore inefficient. In real data applications, the nature of the data is much more complex. We propose a non-linear function-on-function approach, consisting of a functional encoder and a functional decoder, that uses continuous hidden layers of continuous neurons to learn the structure inherent in functional data, addressing the aforementioned concerns of the existing approaches. Our approach gives a low-dimensional latent representation by reducing both the number of functional features and the timepoints at which the functions are observed. The effectiveness of the proposed model is demonstrated through multiple simulations and real data examples.
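One rough way to picture a function-on-function autoencoder is as integral (weighted-sum-over-time) layers in place of ordinary matrix multiplications. The sketch below discretizes the integrals on an observation grid; the layer sizes and the simple quadrature are assumptions for illustration, not the proposed model.

```python
import torch
import torch.nn as nn

class IntegralLayer(nn.Module):
    """Maps k_in input functions on t_in grid points to j_out output functions on t_out points
    via learned bivariate weight kernels, approximating an integral transform."""
    def __init__(self, k_in, j_out, t_in, t_out):
        super().__init__()
        self.kernel = nn.Parameter(0.01 * torch.randn(j_out, k_in, t_out, t_in))
        self.dt = 1.0 / t_in  # quadrature weight for an even grid on [0, 1]

    def forward(self, x):                    # x: (batch, k_in, t_in)
        # y_j(s) ~ sum_k integral of w_jk(s, t) x_k(t) dt
        y = torch.einsum('bkt,jkst->bjs', x, self.kernel) * self.dt
        return torch.tanh(y)

class FunctionalAutoencoder(nn.Module):
    def __init__(self, k=5, t=100, latent_funcs=2, latent_t=10):
        super().__init__()
        self.encoder = IntegralLayer(k, latent_funcs, t, latent_t)
        self.decoder = IntegralLayer(latent_funcs, k, latent_t, t)

    def forward(self, x):
        z = self.encoder(x)                  # (batch, latent_funcs, latent_t): fewer curves, fewer timepoints
        return self.decoder(z), z
```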
Massive data corpora like WebText, Wikipedia, Conceptual Captions, WebImageText, and LAION have propelled recent dramatic progress in AI. Large neural models trained on such datasets produce impressive results and top many of today's benchmarks. A notable omission within this family of large-scale datasets is 3D data. Despite considerable interest and potential applications in 3D vision, datasets of high-fidelity 3D models continue to be mid-sized with limited diversity of object categories. Addressing this gap, we present Objaverse 1.0, a large dataset of objects with 800K+ (and growing) 3D models with descriptive captions, tags, and animations. Objaverse improves upon present day 3D repositories in terms of scale, number of categories, and in the visual diversity of instances within a category. We demonstrate the large potential of Objaverse via four diverse applications: training generative 3D models, improving tail category segmentation on the LVIS benchmark, training open-vocabulary object-navigation models for Embodied AI, and creating a new benchmark for robustness analysis of vision models. Objaverse can open new directions for research and enable new applications across the field of AI.
Training embodied agents in simulation has become mainstream for the embodied AI community. However, these agents often struggle when deployed in the physical world due to their inability to generalize to real-world environments. In this paper, we present Phone2Proc, a method that uses a 10-minute phone scan and conditional procedural generation to create a distribution of training scenes that are semantically similar to the target environment. The generated scenes are conditioned on the wall layout and arrangement of large objects from the scan, while also sampling lighting, clutter, surface textures, and instances of smaller objects with randomized placement and materials. Leveraging just a simple RGB camera, training with Phone2Proc shows massive improvements from 34.7% to 70.7% success rate in sim-to-real ObjectNav performance across a test suite of over 200 trials in diverse real-world environments, including homes, offices, and RoboTHOR. Furthermore, Phone2Proc's diverse distribution of generated scenes makes agents remarkably robust to changes in the real world, such as human movement, object rearrangement, lighting changes, or clutter.
Training effective embodied AI agents often involves manual reward engineering, expert imitation, specialized components such as maps, or leveraging additional sensors for depth and localization. Another approach is to use neural architectures alongside self-supervised objectives which encourage better representation learning. In practice, there are few guarantees that these self-supervised objectives encode task-relevant information. We propose the Scene Graph Contrastive (SGC) loss, which uses scene graphs as general-purpose, training-only, supervisory signals. The SGC loss does away with explicit graph decoding and instead uses contrastive learning to align an agent's representation with a rich graphical encoding of its environment. The SGC loss is generally applicable, simple to implement, and encourages representations that encode objects' semantics, relationships, and history. Using the SGC loss, we attain significant gains on three embodied tasks: Object Navigation, Multi-Object Navigation, and Arm Point Navigation. Finally, we present studies and analyses which demonstrate the ability of our trained representation to encode semantic cues about the environment.
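Conceptually, a contrastive alignment between an agent's state embedding and an embedding of its scene graph can be pictured as an InfoNCE-style objective. The sketch below is a minimal rendering of that idea; the encoders, batch construction, and temperature are placeholders, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def scene_graph_contrastive_loss(agent_repr, graph_repr, temperature=0.07):
    """agent_repr, graph_repr: (batch, dim) embeddings of matched agent states and
    their scene graphs; off-diagonal pairs act as negatives (InfoNCE)."""
    a = F.normalize(agent_repr, dim=-1)
    g = F.normalize(graph_repr, dim=-1)
    logits = a @ g.t() / temperature                  # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # symmetric: agent-to-graph and graph-to-agent directions
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```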
Remote sensing images are useful for a wide variety of environmental and earth monitoring tasks, including tracking deforestation, illegal fishing, urban expansion, and natural disasters. The earth is extremely diverse -- the amount of potential tasks in remote sensing images is massive, and the sizes of features range from several kilometers to just tens of centimeters. However, creating generalizable computer vision methods is a challenge in part due to the lack of a large-scale dataset that captures these diverse features for many tasks. In this paper, we present Satlas, a remote sensing dataset and benchmark that is large in both breadth, featuring all of the aforementioned applications and more, and scale, comprising 290M labels under 137 categories and seven label modalities. We evaluate eight baselines and a proposed method on Satlas, and find that there is substantial room for improvement in addressing research challenges specific to remote sensing, including processing image time series that consist of images from very different types of sensors, and taking advantage of long-range spatial context. We also find that pre-training on Satlas substantially improves performance on downstream tasks with few labeled examples, increasing average accuracy by 16% over ImageNet and 5% over the next best baseline.
Many high-level skills that are required for computer vision tasks, such as parsing questions, comparing and contrasting semantics, and writing descriptions, are also required in other domains such as natural language processing. In this paper, we ask whether this makes it possible to learn those skills from text data and then use them to complete vision tasks without ever training on visual training data. Key to our approach is exploiting the joint embedding space of contrastively trained vision and language encoders. In practice, there can be systematic differences between embedding spaces for different modalities in contrastive models, and we analyze how these differences affect our approach and study a variety of strategies to mitigate this concern. We produce models using only text training data on three tasks: image captioning, visual entailment and visual question answering, and evaluate them on standard benchmarks using images. We find that this kind of transfer is possible and results in only a small drop in performance relative to models trained on images. We also showcase a variety of stylistic image captioning models that were trained using no image data and no human-curated language data, but instead text data from books, the web, or language models.
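One common way to picture this text-only training regime is: embed the caption with the frozen contrastive text encoder, perturb the embedding so the decoder also tolerates image embeddings at test time, and train the decoder to reconstruct the caption. The sketch below illustrates that general idea; the clip_text_encoder, caption_decoder, and tokenizer interfaces and the noise scale are hypothetical placeholders, not necessarily the strategies studied in the paper.

```python
import torch

def text_only_train_step(clip_text_encoder, caption_decoder, tokenizer, captions, noise_std=0.1):
    """Train a captioner on text alone using the shared vision-language embedding space."""
    with torch.no_grad():
        emb = clip_text_encoder(tokenizer(captions))       # (batch, dim) text embeddings
        emb = emb / emb.norm(dim=-1, keepdim=True)
    emb = emb + noise_std * torch.randn_like(emb)          # noise to mitigate the modality gap
    loss = caption_decoder(prefix=emb, target=captions)    # language-modeling loss on the caption
    loss.backward()
    return loss

# At inference, the frozen image encoder's embedding is fed to the same decoder:
# caption = caption_decoder.generate(prefix=clip_image_encoder(image))
```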
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
Colonoscopy is a routine outpatient procedure used to examine the colon and rectum for any abnormalities, including polyps, diverticula, and narrowing of the colon structures. A significant amount of a clinician's time is spent on the snapshots taken during the colonoscopy procedure for maintaining medical records or for further investigation. Automating this step can save time and improve the efficiency of the process. In our work, we collected a dataset of 120 colonoscopy videos and 2,416 snapshots taken during the procedure, annotated by experts. Furthermore, we developed a novel, vision-transformer-based landmark detection algorithm that identifies key anatomical landmarks (the appendiceal orifice, the ileocecal valve/cecum landmark, and the rectum retroflexion) from colonoscopy procedures. Our algorithm applies adaptive gamma correction during preprocessing to maintain consistent brightness across all images. We then use a vision transformer as the feature-extraction backbone and a fully connected network-based classifier head to classify a given frame into four classes: the three landmarks or a non-landmark frame. We compare the vision transformer (ViT-B/16) backbone with ResNet-101 and ConvNeXt-B backbones trained similarly. We report an accuracy of 82% for the vision transformer backbone on the test dataset of snapshots.
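A rough sketch of this pipeline follows: adaptive gamma correction on each frame, then a ViT-B/16 backbone with a small fully connected head over four classes. The gamma heuristic and the use of timm's pretrained ViT weights here are assumptions for illustration, not the authors' exact setup.

```python
import numpy as np
import torch.nn as nn
import timm

def adaptive_gamma_correction(img, target_mean=0.5):
    """img: float array in [0, 1]. Choose gamma so mean brightness approaches target_mean."""
    mean = float(np.clip(img.mean(), 1e-6, 1.0 - 1e-6))
    gamma = np.log(target_mean) / np.log(mean)
    return np.clip(img ** gamma, 0.0, 1.0)

class LandmarkClassifier(nn.Module):
    """ViT feature extractor + fully connected head: 3 landmark classes + 1 non-landmark class."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.backbone = timm.create_model('vit_base_patch16_224', pretrained=True, num_classes=0)
        self.head = nn.Sequential(
            nn.Linear(self.backbone.num_features, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):                  # x: (batch, 3, 224, 224) gamma-corrected frames
        return self.head(self.backbone(x))
```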